

Calibration of Shared Equilibria in General Sum Partially Observable Markov Games

Neural Information Processing Systems

Training multi-agent systems (MAS) to achieve realistic equilibria gives us a useful tool to understand and model real-world systems. We consider a general sum partially observable Markov game where agents of different types share a single policy network, conditioned on agent-specific information. This paper aims at i) formally understanding equilibria reached by such agents, and ii) matching emergent phenomena of such equilibria to real-world targets. Parameter sharing with decentralized execution has been introduced as an efficient way to train multiple agents using a single policy network. However, the nature of the resulting equilibria reached by such agents has not yet been studied: we introduce the novel concept of Shared equilibrium as a symmetric pure Nash equilibrium of a certain Functional Form Game (FFG) and prove convergence to the latter for a certain class of games using self-play. In addition, it is important that such equilibria satisfy certain constraints so that MAS are calibrated to real-world data for practical use: we solve this problem by introducing a novel dual-Reinforcement Learning approach that fits emergent behaviors of agents in a Shared equilibrium to externally specified targets, and apply our methods to an n-player market example. We do so by calibrating parameters governing distributions of agent types rather than individual agents, which allows both behavior differentiation among agents and coherent scaling of the shared policy network to multiple agents.
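The parameter-sharing setup described in the abstract can be sketched as follows: one set of network weights serves every agent, and each agent conditions the shared network on its own observation and type. This is a minimal illustration only; the network sizes, the one-hot type encoding, and all variable names are our assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
OBS_DIM, TYPE_DIM, HIDDEN, N_ACTIONS = 4, 2, 16, 3

# One set of weights shared by every agent: a tiny two-layer policy network.
W1 = rng.normal(scale=0.1, size=(OBS_DIM + TYPE_DIM, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_ACTIONS))

def shared_policy(obs, agent_type):
    """Action distribution for one agent. Conditioning on the agent's type
    vector lets agents of different types behave differently even though
    they share the same parameters."""
    x = np.concatenate([obs, agent_type])
    h = np.tanh(x @ W1)
    logits = h @ W2
    p = np.exp(logits - logits.max())  # stable softmax
    return p / p.sum()

# Decentralized execution: each agent queries the same network independently.
agents = [(rng.normal(size=OBS_DIM), np.eye(TYPE_DIM)[i % TYPE_DIM])
          for i in range(5)]
actions = [rng.choice(N_ACTIONS, p=shared_policy(o, t)) for o, t in agents]
```

Because the policy is a single function of (observation, type), adding more agents does not grow the parameter count, which is the "coherent scaling" property the abstract refers to.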



Review for NeurIPS paper: Calibration of Shared Equilibria in General Sum Partially Observable Markov Games

Neural Information Processing Systems

Summary and Contributions: The paper presents the concept of a shared equilibrium in certain kinds of multi-agent stochastic games with a restricted form of partial observability. The formalism includes the notion of supertypes (distributions over agent types) and types (where each agent is assigned a true type each episode). An agent's type influences its rewards, as do the joint state of the system and the joint action over all agents. One key constraint is that all agents of the same type follow the same policy from an egocentric perspective (where they themselves are the focal agent and all other agents are interchangeable). The authors define a policy-gradient approach for individual agents and also present a higher-order learning rule that shifts the distribution over supertypes on a slower timescale.
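The two-timescale structure the reviewer describes (fast policy updates given the current supertypes, and a slower rule that shifts the supertype distribution toward an external calibration target) can be illustrated with a deliberately simplified scalar example. The toy objective, learning rates, and variable names are ours, not the paper's; the point is only the nesting of the two update loops.

```python
# Toy two-timescale sketch (our simplification, not the paper's algorithm).
theta = 0.0        # shared policy parameter (fast timescale)
lam = 0.5          # supertype-distribution parameter (slow timescale)
LR_FAST, LR_SLOW = 0.1, 0.1
TARGET = 0.8       # externally specified calibration target

for step in range(500):
    # Fast update: the shared policy improves against the current supertypes.
    # Toy inner objective: agents want theta close to lam, so this step
    # follows the gradient of -(theta - lam)^2 / 2.
    theta += LR_FAST * (lam - theta)
    if step % 10 == 0:
        # Slow update (run 10x less often): nudge the supertype parameter
        # so the emergent behavior (here just theta) matches the target.
        lam += LR_SLOW * (TARGET - theta)
```

Running the fast loop many times between slow updates lets the inner equilibrium roughly settle before the calibration parameters move, which is the usual justification for separating the two timescales.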


Review for NeurIPS paper: Calibration of Shared Equilibria in General Sum Partially Observable Markov Games

Neural Information Processing Systems

The paper was refereed by 4 knowledgeable reviewers. All reviewers appreciated the contributions of the paper:
- Formalization of self-play and a formal proof of when it is guaranteed to converge
- A new algorithm for calibrating equilibria that is more effective than a naive use of BO
- Convincing results on a market-agent scenario

The biggest concern discussed among the reviewers was the extended-transitivity assumption. While this was partially addressed in the rebuttal, the authors should add a longer discussion in the paper of which games satisfy this assumption. After the discussion, however, all reviewers agreed that the paper merits acceptance, and I join this decision.

